The annotation of sentences in legal documents by humans is an important prerequisite for many machine-learning based systems supporting legal tasks. Typically, the annotation is done sequentially, sentence by sentence, which is often time consuming and, hence, expensive. In this paper, we introduce a proof-of-concept system for annotating sentences laterally. The approach is based on the observation that sentences that are similar in meaning often receive the same label with respect to a particular type system. We use this observation by allowing annotators to quickly view and annotate sentences that are semantically similar to a given sentence, across an entire corpus of documents. Here, we present the interface of the system and empirically evaluate the approach. The experiments show that lateral annotation has the potential to make the annotation process quicker and more consistent.
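To make the retrieval step behind lateral annotation concrete, the following is a minimal sketch, not the authors' implementation: the encoder model, the similarity cut-off, and the toy corpus are illustrative assumptions.

```python
# Given one labeled sentence, surface semantically similar sentences from
# the corpus so they can be reviewed and labeled together in one pass.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

corpus = [
    "The court finds the defendant liable for breach of contract.",
    "Accordingly, the defendant is held liable for breaching the agreement.",
    "The hearing is scheduled for March 3.",
]
query = "The defendant is liable for breach of contract."  # already labeled

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the labeled query sentence.
scores = util.cos_sim(query_emb, corpus_emb)[0]
for idx in scores.argsort(descending=True):
    if scores[idx] >= 0.7:  # similarity cut-off for suggestion candidates
        print(f"{scores[idx]:.2f}  {corpus[idx]}")
```

An annotator can then accept or reject the query sentence's label for each suggestion in a single pass, rather than encountering the similar sentences one by one in document order.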
In this paper, we examine the use of multilingual sentence embeddings to transfer predictive models for the functional segmentation of adjudicatory decisions across jurisdictions, legal systems (common and civil law), languages, and domains (i.e., contexts). Mechanisms for utilizing linguistic resources outside their original context have significant potential benefits in AI & Law, because differences between legal systems, languages, or traditions often hinder the wider adoption of research outcomes. We analyze the use of language-agnostic sentence representations in sequence-labeling models based on Gated Recurrent Units (GRUs) that are transferable across languages. To investigate transfer between different contexts, we developed an annotation scheme for the functional segmentation of adjudicatory decisions. We find that models generalize beyond the contexts on which they were trained (e.g., a model trained on administrative decisions in the US can be applied to criminal-law decisions in Italy). Further, we find that training the models on multiple contexts increases robustness and improves overall performance when evaluating on previously unseen contexts. Finally, we find that pooling the training data from all contexts enhances the models' in-context performance.
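The modeling idea lends itself to a short sketch: a bidirectional GRU tags each sentence of a decision with a functional type, operating on language-agnostic sentence embeddings so the same model can be applied across languages. The dimensions and label set below are illustrative assumptions, not the paper's annotation scheme.

```python
import torch
import torch.nn as nn

EMB_DIM = 768   # dimensionality of the multilingual sentence embeddings
NUM_LABELS = 4  # illustrative, e.g. Background/Analysis/Outcome/Other

class FunctionalSegmenter(nn.Module):
    """Tags each sentence of a decision with a functional type."""
    def __init__(self, emb_dim=EMB_DIM, hidden=256, num_labels=NUM_LABELS):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, sent_embs):        # (batch, num_sentences, emb_dim)
        states, _ = self.gru(sent_embs)  # contextualize each sentence
        return self.classifier(states)   # (batch, num_sentences, num_labels)

# One "document" of 30 sentences, already embedded by a multilingual
# encoder (e.g., LASER or LaBSE), so the tagger itself is language-agnostic.
doc = torch.randn(1, 30, EMB_DIM)
logits = FunctionalSegmenter()(doc)
print(logits.argmax(-1))  # predicted functional label per sentence
```

Because the tagger only ever sees embedding vectors, changing the input language requires no change to the model, which is what makes cross-jurisdiction transfer possible.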
We analyze the ability of pre-trained language models to transfer knowledge among datasets annotated with different type systems and to generalize beyond the domain and dataset they were trained on. We create a meta-task over multiple datasets focused on the prediction of rhetorical roles. The prediction of the rhetorical role a sentence plays in a case decision is an important and frequently studied task in AI & Law. Typically, it requires the annotation of a large number of sentences to train a model, which can be time-consuming and expensive. Further, the application of a model is restricted to the same dataset it was trained on. We fine-tune language models and evaluate their performance across datasets to investigate the models' ability to generalize across domains. Our results suggest that the approach can help overcome the cold-start problem in active or interactive learning and show the models' ability to generalize across datasets and domains.
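A hedged sketch of the transfer setup: fine-tune a pre-trained encoder for sentence-level rhetorical-role classification on one dataset, then evaluate it unchanged on another. The model name, label count, and toy examples are placeholders, not the paper's corpora.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # label count is illustrative

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=64)

# Toy stand-ins for two differently annotated rhetorical-role datasets.
train_ds = Dataset.from_dict({
    "sentence": ["The appellant argues the ruling was flawed.",
                 "We affirm the judgment.",
                 "The facts are as follows."],
    "label": [0, 1, 2],
}).map(tokenize, batched=True)
other_domain_ds = Dataset.from_dict({
    "sentence": ["Petitioner contends the statute does not apply.",
                 "The decision is reversed."],
    "label": [0, 1],
}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rr-model", num_train_epochs=1),
    train_dataset=train_ds,
)
trainer.train()
# Zero-shot transfer: evaluate on a dataset the model never saw in training.
print(trainer.evaluate(other_domain_ds))
```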
In this paper, we propose a method for building strong, explainable classifiers in the form of Boolean search rules. We developed an interactive environment called CASE (Computer Assisted Semantic Exploration) that exploits word co-occurrence to guide human annotators in the selection of relevant search terms. The system seamlessly facilitates the iterative evaluation and improvement of the classification rules. The process enables human annotators to leverage the benefits of statistical information while incorporating their expert intuition into the creation of such rules. We evaluate classifiers created with our CASE system on four datasets and compare the results with machine-learning approaches, including SkopeRules, Random Forest, Support Vector Machine, and fastText classifiers. The results drive a discussion of the trade-off between the superior compactness, simplicity, and intuitiveness of Boolean search rules and the better performance of state-of-the-art machine-learning models for text classification.
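A Boolean search rule of the kind CASE produces can be evaluated with a few lines of code; the rule below is invented for illustration, whereas in CASE the terms are chosen by annotators guided by word co-occurrence statistics.

```python
def matches(doc: str, all_of=(), any_of=(), none_of=()) -> bool:
    """Evaluate one Boolean search rule against a document."""
    text = doc.lower()
    return (all(t in text for t in all_of)
            and (not any_of or any(t in text for t in any_of))
            and not any(t in text for t in none_of))

docs = [
    "The tenant seeks damages for breach of the lease agreement.",
    "The weather report predicts rain tomorrow.",
]
# Rule: breach AND (lease OR contract) AND NOT weather
preds = [matches(d, all_of=("breach",), any_of=("lease", "contract"),
                 none_of=("weather",)) for d in docs]
print(preds)  # [True, False]
```

Unlike the ML baselines it is compared against, the whole classifier is readable at a glance, which is exactly the trade-off the paper discusses.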
Coronary Computed Tomography Angiography (CCTA) provides information on the presence, extent, and severity of obstructive coronary artery disease. Large-scale clinical studies analyzing CCTA-derived metrics typically require ground-truth validation in the form of high-fidelity 3D intravascular imaging. However, manual rigid alignment of intravascular images to corresponding CCTA images is both time consuming and user-dependent. Moreover, intravascular modalities suffer from several non-rigid motion-induced distortions arising from distortions in the imaging catheter path. To address these issues, here we present a semi-automatic segmentation-based framework for both rigid and non-rigid matching of intravascular images to CCTA images. We formulate the problem in terms of finding the optimal \emph{virtual catheter path} that samples the CCTA data to recapitulate the coronary artery morphology found in the intravascular image. We validate our co-registration framework on a cohort of $n=40$ patients using bifurcation landmarks as ground truth for longitudinal and rotational registration. Our results indicate that our non-rigid registration significantly outperforms other co-registration approaches for luminal bifurcation alignment in both the longitudinal (mean mismatch: 3.3 frames) and rotational directions (mean mismatch: 28.6 degrees). By providing a differentiable framework for automatic multi-modal intravascular data fusion, our co-registration modules significantly reduce the manual effort required to conduct large-scale multi-modal clinical studies while also providing a solid foundation for the development of machine learning-based co-registration approaches.
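The authors' differentiable framework is not reproduced here, but a common rigid baseline for the longitudinal part of the problem can be sketched: slide the intravascular lumen-area profile along the profile sampled from the CCTA centerline and keep the offset with the highest correlation. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic lumen-area profile along the CCTA centerline (200 positions).
ccta_area = np.convolve(rng.random(200), np.ones(10) / 10, mode="same")
true_offset = 37  # ground-truth frame offset for this synthetic example
ivus_area = (ccta_area[true_offset:true_offset + 120]
             + 0.01 * rng.standard_normal(120))  # noisy pullback measurement

def best_longitudinal_offset(short, long_):
    """Rigid 1D registration by maximizing Pearson correlation."""
    scores = [np.corrcoef(short, long_[s:s + len(short)])[0, 1]
              for s in range(len(long_) - len(short))]
    return int(np.argmax(scores))

print(best_longitudinal_offset(ivus_area, ccta_area))  # recovers ~37
```

Such rigid baselines cannot absorb the catheter-path distortions the paper targets, which is why its non-rigid virtual-catheter-path formulation outperforms them.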
Artificial intelligence (AI) systems based on deep neural networks (DNNs) and machine learning (ML) algorithms are increasingly used to solve critical problems in bioinformatics, biomedical informatics, and precision medicine. However, complex DNN or ML models that are unavoidably opaque and perceived as black-box methods may not be able to explain why and how they make certain decisions. Such black-box models are difficult to comprehend not only for targeted users and decision-makers but also for AI developers. Moreover, in sensitive areas like healthcare, explainability and accountability are not only desirable properties of AI but also legal requirements -- especially when AI may have significant impacts on human lives. Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models and make it possible to interpret how AI systems make their decisions with transparency. An interpretable ML model can explain how it makes predictions and which factors affect the model's outcomes. The majority of state-of-the-art interpretable ML methods have been developed in a domain-agnostic way and originate from computer vision, automated reasoning, or even statistics. Many of these methods cannot be directly applied to bioinformatics problems without prior customization, extension, and domain adaptation. In this paper, we discuss the importance of explainability with a focus on bioinformatics. We analyse and provide a comprehensive overview of model-specific and model-agnostic interpretable ML methods and tools. Via several case studies covering bioimaging, cancer genomics, and biomedical text mining, we show how bioinformatics research could benefit from XAI methods and how they could help improve decision fairness.
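As a small illustration of one model-agnostic method covered by such surveys, permutation feature importance scores a feature by how much shuffling it degrades a trained model; the dataset and model here are generic stand-ins for a bioinformatics classifier.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)  # "black box"

# Shuffle each feature column and measure the drop in held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```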
Physics-Informed Neural Networks (PINNs) are gaining popularity as a method for solving differential equations. While being more feasible in some contexts than classical numerical techniques, PINNs still lack credibility. A remedy for that can be found in Uncertainty Quantification (UQ), which is just beginning to emerge in the context of PINNs. Assessing how well the trained PINN complies with the imposed differential equation is the key to tackling uncertainty, yet a comprehensive methodology for this task is still lacking. We propose a framework for UQ in Bayesian PINNs (B-PINNs) that incorporates the discrepancy between the B-PINN solution and the unknown true solution. We exploit recent results on error bounds for PINNs on linear dynamical systems and demonstrate the predictive uncertainty on a class of linear ODEs.
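A toy sketch of the ingredients involved is given below: a PINN for the linear ODE y' = -y with y(0) = 1, trained as a small ensemble to yield a crude predictive uncertainty. The ensemble is only a stand-in for the Bayesian posterior a B-PINN maintains, and the residual computed in the loss is the quantity the cited PINN error bounds are built from.

```python
import torch

def train_pinn(seed):
    torch.manual_seed(seed)
    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    t = torch.linspace(0, 2, 64).reshape(-1, 1).requires_grad_(True)
    for _ in range(2000):
        y = net(t)
        dy = torch.autograd.grad(y.sum(), t, create_graph=True)[0]
        residual = dy + y                  # enforce y' = -y
        ic = net(torch.zeros(1, 1)) - 1.0  # enforce y(0) = 1
        loss = (residual ** 2).mean() + (ic ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

nets = [train_pinn(seed) for seed in range(5)]
t_test = torch.linspace(0, 2, 5).reshape(-1, 1)
preds = torch.stack([n(t_test).detach() for n in nets])
print("mean:", preds.mean(0).squeeze())  # close to exp(-t)
print("std :", preds.std(0).squeeze())   # crude predictive uncertainty
```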
We show that for a plane imaged by an endoscope, the specular isophotes are concentric circles on the scene plane, which appear as nested ellipses in the image. We show that these ellipses can be detected and used to estimate the plane's normal direction, forming a normal reconstruction method, which we validate on simulated data. In practice, the anatomical surfaces visible in endoscopic images are locally planar. We use our method to show that the surface normal can thus be reconstructed for each of the numerous specularities typically visible on moist tissues. We show results on laparoscopic and colonoscopic images.
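The detection step can be sketched directly: isophotes are level sets of image intensity, so thresholding around a highlight at several levels and fitting an ellipse to each contour recovers the nested ellipses the method works from. Recovering the plane normal from these conics additionally requires the camera intrinsics and is not shown; the image below is synthetic.

```python
import cv2
import numpy as np

# Synthetic frame with one elliptical specular highlight.
yy, xx = np.mgrid[0:240, 0:320]
img = (255 * np.exp(-(((xx - 160) / 40.0) ** 2
                      + ((yy - 120) / 25.0) ** 2))).astype(np.uint8)

ellipses = []
for level in (250, 230, 210, 190):  # descending intensity levels
    _, mask = cv2.threshold(img, level, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    for c in contours:
        if len(c) >= 5:  # cv2.fitEllipse needs at least 5 points
            ellipses.append(cv2.fitEllipse(c))

# Nested isophote ellipses around one specularity share roughly the same
# center and orientation, as the paper's geometry predicts.
for (cx, cy), (a, b), ang in ellipses:
    print(f"center=({cx:.1f},{cy:.1f}) axes=({a:.1f},{b:.1f}) angle={ang:.1f}")
```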
A significant level of stigma and inequality exists in mental healthcare, especially in under-served populations, and it spreads through collected data. When not properly accounted for, machine learning (ML) models learned from data can reinforce the structural biases already present in society. Here, we present a systematic study of bias in ML models designed to predict depression in four different case studies covering different countries and populations. We find that standard ML approaches regularly show biased behavior. However, we show that standard mitigation techniques, and our own post-hoc method, can be effective in reducing the level of unfair bias. We provide practical recommendations for developing ML models for depression risk prediction with increased fairness and trust in the real world. No single best ML model for depression prediction provides equality of outcomes. This emphasizes the importance of analyzing fairness during model selection and of transparent reporting about the impact of debiasing interventions.
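A minimal sketch of the kind of audit involved: compare positive-prediction rates and true-positive rates across a protected group, then apply a group-specific decision threshold as a simple post-hoc mitigation. The data are synthetic, and the paper's own post-hoc method is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)   # protected attribute (0/1)
y_true = rng.integers(0, 2, 1000)  # depression label
# A deliberately biased risk score: group 1 receives a systematic +0.1.
scores = 0.5 * y_true + 0.4 * rng.random(1000) + 0.1 * group

def report(th0, th1):
    pred = scores >= np.where(group == 0, th0, th1)
    for g in (0, 1):
        sel = group == g
        print(f"group {g}: positive rate {pred[sel].mean():.2f}, "
              f"TPR {pred[sel & (y_true == 1)].mean():.2f}")

print("single threshold 0.5:")
report(0.5, 0.5)  # group 1 is favored by the built-in +0.1
print("group-specific thresholds 0.5 / 0.6:")
report(0.5, 0.6)  # post-hoc offset roughly equalizes the rates
```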
Word order, an essential property of natural languages, is injected into Transformer-based neural language models using position encoding. However, recent experiments have shown that explicit position encoding is not always useful, since some models without it have managed to achieve state-of-the-art performance on some tasks. To better understand this phenomenon, we examine the effect of removing position encodings on the pre-training objective itself (i.e., masked language modelling), to test whether models can reconstruct position information from co-occurrences alone. We do so by controlling the amount of masked tokens in the input sentence, as a proxy for the importance of position information for the task. We find that the necessity of position information increases with the amount of masking, and that masked language models without position encodings are not able to reconstruct this information on the task. These findings point towards a direct relationship between the amount of masking and the ability of Transformers to capture order-sensitive aspects of language using position encoding.
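The masking knob the paper turns is easy to see in code: with HuggingFace, the fraction of tokens selected for the MLM objective is the collator's mlm_probability. The other axis of the experiment, removing position encodings, requires modifying the model itself and is not shown here.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = [tokenizer("Word order is an essential property of natural languages.")]

for p in (0.15, 0.40, 0.75):  # light to heavy masking
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=p)
    labels = collator(batch)["labels"][0]
    n_selected = (labels != -100).sum().item()  # tokens the model must predict
    print(f"mlm_probability={p}: {n_selected} tokens selected for prediction")
```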